visual memory


WorldMM: Dynamic Multimodal Memory Agent for Long Video Reasoning

Yeo, Woongyeong, Kim, Kangsan, Yoon, Jaehong, Hwang, Sung Ju

arXiv.org Artificial Intelligence

Recent advances in video large language models have demonstrated strong capabilities in understanding short clips. However, scaling them to hours- or days-long videos remains highly challenging due to limited context capacity and the loss of critical visual details during abstraction. Existing memory-augmented methods mitigate this by leveraging textual summaries of video segments, yet they heavily rely on text and fail to utilize visual evidence when reasoning over complex scenes. Moreover, retrieving from fixed temporal scales further limits their flexibility in capturing events that span variable durations. To address this, we introduce WorldMM, a novel multimodal memory agent that constructs and retrieves from multiple complementary memories, encompassing both textual and visual representations. WorldMM comprises three types of memory: episodic memory indexes factual events across multiple temporal scales, semantic memory continuously updates high-level conceptual knowledge, and visual memory preserves detailed information about scenes. During inference, an adaptive retrieval agent iteratively selects the most relevant memory source and leverages multiple temporal granularities based on the query, continuing until it determines that sufficient information has been gathered. WorldMM significantly outperforms existing baselines across five long video question-answering benchmarks, achieving an average 8.4% performance gain over previous state-of-the-art methods, showing its effectiveness on long video reasoning.
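The abstract describes an adaptive retrieval agent that iteratively picks among episodic, semantic, and visual memories until it judges the gathered evidence sufficient. A minimal Python sketch of that control loop is below; the data structures, the `score_fn` scorer, and the sufficiency threshold are illustrative assumptions, not WorldMM's released implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MultimodalMemory:
    episodic: dict = field(default_factory=dict)   # {temporal_scale: [event summaries]}
    semantic: list = field(default_factory=list)   # continuously updated high-level concepts
    visual: list = field(default_factory=list)     # detailed keyframes / scene features

def retrieve(memory, query, score_fn, max_steps=5, enough=0.8):
    """Iteratively pick the most promising memory source until evidence suffices."""
    evidence, confidence = [], 0.0
    for _ in range(max_steps):
        if confidence >= enough:           # agent decides it has gathered enough
            break
        candidates = {
            "episodic": memory.episodic,
            "semantic": memory.semantic,
            "visual": memory.visual,
        }
        # score each source for the query given what has already been retrieved
        source = max(candidates, key=lambda s: score_fn(query, s, evidence))
        evidence.append((source, candidates[source]))
        confidence = score_fn(query, "sufficiency", evidence)
    return evidence
```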


Agentic Learner with Grow-and-Refine Multimodal Semantic Memory

Bo, Weihao, Zhang, Shan, Sun, Yanpeng, Wu, Jingjing, Xie, Qunyi, Tan, Xiao, Chen, Kunbin, He, Wei, Li, Xiaofan, Zhao, Na, Wang, Jingdong, Li, Zechao

arXiv.org Artificial Intelligence

MLLMs exhibit strong reasoning on isolated queries, yet they operate de novo -- solving each problem independently and often repeating the same mistakes. Existing memory-augmented agents mainly store past trajectories for reuse. However, trajectory-based memory suffers from brevity bias, gradually losing essential domain knowledge. More critically, even in truly multimodal problem-solving settings, it records only a single-modality trace of past behavior, failing to preserve how visual attention and logical reasoning jointly contributed to the solution. This is fundamentally misaligned with human cognition: semantic memory is both multimodal and integrated, preserving visual and abstract knowledge through coordinated but distinct representational streams. We thus introduce ViLoMem, a dual-stream memory framework that constructs compact, schema-based memory. It separately encodes visual distraction patterns and logical reasoning errors, enabling MLLMs to learn from their successful and failed experiences. Following a grow-and-refine principle, the system incrementally accumulates and updates multimodal semantic knowledge -- preserving stable, generalizable strategies while avoiding catastrophic forgetting. Across six multimodal benchmarks, ViLoMem consistently improves pass@1 accuracy and substantially reduces repeated visual and logical errors. Ablations confirm the necessity of dual-stream memory with explicit distraction--hallucination separation, demonstrating the value of error-aware multimodal memory for lifelong and cross-domain agentic learning. Our project page will be available at https://weihao-bo.github.io/ViLoMeo-page.
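To make the "dual-stream, grow-and-refine" idea concrete, here is a small sketch of a memory that stores visual-distraction schemas and logical-error schemas in separate streams, merging near-duplicates instead of appending endlessly. The class, string-valued schemas, and merge rule are assumptions for illustration, not the authors' code.

```python
from difflib import SequenceMatcher

class DualStreamMemory:
    def __init__(self, merge_threshold=0.8):
        self.visual_stream = []   # schemas of visual distraction patterns
        self.logic_stream = []    # schemas of logical reasoning errors
        self.merge_threshold = merge_threshold

    def _grow_or_refine(self, stream, schema):
        # refine: fold near-duplicate knowledge into an existing schema,
        # keeping stable strategies rather than growing without bound
        for i, old in enumerate(stream):
            if SequenceMatcher(None, old, schema).ratio() >= self.merge_threshold:
                stream[i] = schema if len(schema) > len(old) else old
                return
        stream.append(schema)     # grow: genuinely new knowledge

    def record(self, visual_schema=None, logic_schema=None):
        if visual_schema:
            self._grow_or_refine(self.visual_stream, visual_schema)
        if logic_schema:
            self._grow_or_refine(self.logic_stream, logic_schema)

    def as_prompt(self):
        # retrieved before the next attempt so past errors are not repeated
        return {"visual_pitfalls": self.visual_stream,
                "logic_pitfalls": self.logic_stream}
```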


Visual Memory for Robust Path Following

Neural Information Processing Systems

In this paper, we present an approach for doing so. Given a demonstration of a path, a first network generates an abstraction of the path. Equipped with this abstraction, a second network then observes the world and decides how to act in order to retrace the path under noisy actuation and a changing environment. The two networks are optimized end-to-end at training time. We evaluate the method in two realistic simulators, performing path following both forwards and backwards. Our experiments show that our approach outperforms both a classical approach to solving this task as well as a number of other baselines.
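A toy NumPy sketch of the two-stage idea follows: one module abstracts the demonstrated path into memory vectors, and a second attends over them to choose an action from the current observation. All shapes, the projection, and the attention rule are illustrative assumptions rather than the paper's architecture.

```python
import numpy as np

def abstract_path(frames, proj):            # frames: (T, D) features of the demo
    return frames @ proj                    # (T, K) path abstraction / memory

def act(observation, memory, proj, actions=("forward", "left", "right")):
    query = observation @ proj              # embed the current view into memory space
    weights = np.exp(memory @ query)
    weights /= weights.sum()                # soft attention over the demo steps
    context = weights @ memory              # which part of the path we are on
    head = np.random.default_rng(0).standard_normal((len(actions), context.size))
    return actions[int(np.argmax(head @ context))]  # stand-in for a learned policy head

rng = np.random.default_rng(1)
proj = rng.standard_normal((16, 8))
memory = abstract_path(rng.standard_normal((20, 16)), proj)
print(act(rng.standard_normal(16), memory, proj))
```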


CogGNN: Cognitive Graph Neural Networks in Generative Connectomics

Soussia, Mayssa, Lin, Yijun, Mahjoub, Mohamed Ali, Rekik, Islem

arXiv.org Artificial Intelligence

Generative learning has advanced network neuroscience, enabling tasks like graph super-resolution, temporal graph prediction, and multimodal brain graph fusion. However, current methods, mainly based on graph neural networks (GNNs), focus solely on structural and topological properties, neglecting cognitive traits. To address this, we introduce the first cognified generative model, CogGNN, which endows GNNs with cognitive capabilities (e.g., visual memory) to generate brain networks that preserve cognitive features. While the framework is broadly applicable, we present a specific variant designed to integrate visual input, a key factor in brain functions like pattern recognition and memory recall. As a proof of concept, we use our model to learn connectional brain templates (CBTs), population-level fingerprints derived from multi-view brain networks. Unlike prior work that overlooks cognitive properties, CogGNN generates CBTs that are both cognitively and structurally meaningful. Our contributions are: (i) a novel cognition-aware generative model with a visual-memory-based loss; (ii) a CBT-learning framework with a co-optimization strategy that yields well-centered, discriminative, cognitively enhanced templates. Extensive experiments show that CogGNN outperforms state-of-the-art methods, establishing a strong foundation for cognitively grounded brain network modeling.
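The abstract mentions co-optimizing a visual-memory-based loss with a CBT-learning objective. A heavily hedged sketch of what such a combined objective could look like is below; the specific centeredness and visual-memory terms, and the weighting, are assumptions for illustration only.

```python
import numpy as np

def centeredness_loss(template, brain_graphs):
    # a well-centered template stays close to every subject's network
    return float(np.mean([np.linalg.norm(template - g) for g in brain_graphs]))

def visual_memory_loss(template_embedding, visual_memory):
    # the template's embedding should remain close to stored visual patterns
    dists = np.linalg.norm(visual_memory - template_embedding, axis=1)
    return float(dists.min())

def cognified_objective(template, graphs, embed, visual_memory, lam=0.5):
    # co-optimization: structural centeredness plus a cognitive (visual-memory) term
    return centeredness_loss(template, graphs) + lam * visual_memory_loss(embed(template), visual_memory)
```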


Separating Knowledge and Perception with Procedural Data

Rodríguez-Muñoz, Adrián, Baradad, Manel, Isola, Phillip, Torralba, Antonio

arXiv.org Artificial Intelligence

We train representation models with procedural data only, and apply them on visual similarity, classification, and semantic segmentation tasks without further training by using visual memory -- an explicit database of reference image embeddings. Unlike prior work on visual memory, our approach achieves full compartmentalization with respect to all real-world images while retaining strong performance. Compared to a model trained on Places, our procedural model performs within $1\%$ on NIGHTS visual similarity, outperforms by $8\%$ and $15\%$ on CUB200 and Flowers102 fine-grained classification, and is within $10\%$ on ImageNet-1K classification. It also demonstrates strong zero-shot segmentation, achieving an $R^2$ on COCO within $10\%$ of the models trained on real data. Finally, we analyze procedural versus real data models, showing that parts of the same object have dissimilar representations in procedural models, resulting in incorrect searches in memory and explaining the remaining performance gap.
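The "visual memory" used here is an explicit database of reference embeddings queried by nearest-neighbour search, with no training on real images. A minimal sketch of that inference path is shown below; the function names and the cosine/majority-vote choices are placeholders, not the paper's code.

```python
import numpy as np

def build_memory(embed, reference_images, labels):
    feats = np.stack([embed(x) for x in reference_images])
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)   # unit-normalise once
    return feats, np.asarray(labels)

def classify(embed, image, memory, k=5):
    feats, labels = memory
    q = embed(image)
    q /= np.linalg.norm(q)
    top = np.argsort(-feats @ q)[:k]                  # cosine-similarity search
    vals, counts = np.unique(labels[top], return_counts=True)
    return vals[np.argmax(counts)]                    # majority vote among neighbours
```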


Long-horizon Visual Instruction Generation with Logic and Attribute Self-reflection

Suo, Yucheng, Ma, Fan, Shen, Kaixin, Zhu, Linchao, Yang, Yi

arXiv.org Artificial Intelligence

Visual instructions for long-horizon tasks are crucial as they intuitively clarify complex concepts and enhance retention across extended steps. Directly generating a series of images with text-to-image models, without considering the context of previous steps, results in inconsistent images and increases cognitive load. Additionally, the generated images often miss objects, or render object attributes such as color, shape, and state inaccurately. To address these challenges, we propose LIGER, the first training-free framework for Long-horizon Instruction GEneration with logic and attribute self-Reflection. LIGER first generates a draft image for each step using the historical prompt and the visual memory of previous steps. This step-by-step generation approach maintains consistency between images in long-horizon tasks. Moreover, LIGER utilizes various image editing tools to rectify errors in the draft images, including wrong attributes, logic errors, object redundancy, and identity inconsistency. Through this self-reflection mechanism, LIGER improves the logical and attribute correctness of the images. To verify whether the generated images assist human understanding, we manually curated a new benchmark consisting of various long-horizon tasks, with human-annotated ground-truth expressions reflecting human-defined criteria for how an illustrative image should appear. Experiments demonstrate that the visual instructions generated by LIGER are more comprehensive than those of baseline methods.
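A high-level sketch of the generate-then-reflect loop the abstract describes is given below. The `generate`, `critique`, and `edit` callables stand in for a text-to-image model, an error checker, and image-editing tools; their interfaces and the bounded number of reflection rounds are assumptions, not LIGER's actual API.

```python
def generate_instructions(steps, generate, critique, edit, max_rounds=3):
    images, visual_memory = [], []
    history_prompt = ""
    for step in steps:
        # draft conditioned on the history prompt and visual memory of earlier steps
        draft = generate(prompt=history_prompt + step, memory=visual_memory)
        for _ in range(max_rounds):               # bounded self-reflection rounds
            errors = critique(draft, step)        # wrong attributes, logic errors, redundancy...
            if not errors:
                break
            draft = edit(draft, errors)           # rectify with editing tools
        images.append(draft)
        visual_memory.append(draft)               # keeps later steps consistent
        history_prompt += step + "\n"
    return images
```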


GraphEQA: Using 3D Semantic Scene Graphs for Real-time Embodied Question Answering

Saxena, Saumya, Buchanan, Blake, Paxton, Chris, Chen, Bingqing, Vaskevicius, Narunas, Palmieri, Luigi, Francis, Jonathan, Kroemer, Oliver

arXiv.org Artificial Intelligence

In Embodied Question Answering (EQA), an agent must explore and develop a semantic understanding of an unseen environment in order to answer a situated question with confidence. For example, to answer the question "How many chairs are there at the dining table?", the agent might rely on commonsense knowledge to understand that dining tables are often associated with dining rooms, and that dining rooms are usually near the kitchen towards the back of a home. A reasonable navigation strategy would involve navigating to the back of the house to locate a kitchen. Grounding this search in the current environment, however, requires the agent to continually maintain an understanding of where it is, memory of where it has been, and what further exploratory actions will lead it to relevant regions. Finally, the agent needs to observe the target object(s) and perform visual grounding in order to reason about the number of chairs around the dining table and confidently answer the question correctly. This remains a challenging problem in robotics, due to the difficulties in obtaining useful semantic representations, updating these representations online, and leveraging prior world knowledge for efficient exploration and planning. Aiming to address these limitations, we propose GraphEQA, a novel approach that utilizes real-time 3D metric-semantic scene graphs (3DSGs) and task-relevant images as multi-modal memory for grounding Vision-Language Models (VLMs) to perform EQA tasks in unseen environments. We employ a hierarchical planning approach that exploits the hierarchical nature of 3DSGs for structured planning and semantic-guided exploration.
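A rough sketch of the kind of multimodal memory and hierarchical prompt described here is below: a metric-semantic scene graph serialised coarsest-first, handed to a VLM together with task-relevant keyframes. The dataclass, the prompt format, and the `vlm` callable are illustrative assumptions, not GraphEQA's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str                     # e.g. "kitchen", "dining_table", "chair"
    level: str                    # "building" / "room" / "object"
    position: tuple
    children: list = field(default_factory=list)

def plan(question, scene_graph_root, keyframes, vlm):
    # serialise the hierarchy top-down so the VLM can plan coarse-to-fine
    def walk(node, depth=0):
        yield "  " * depth + f"{node.level}: {node.name} @ {node.position}"
        for c in node.children:
            yield from walk(c, depth + 1)
    prompt = question + "\nScene graph:\n" + "\n".join(walk(scene_graph_root))
    return vlm(prompt, images=keyframes)   # next exploration target, or a final answer
```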


Reviews: Visual Memory for Robust Path Following

Neural Information Processing Systems

This paper considers the problem of retracing a trajectory from noisy visual observations. The proposed approach first takes a sequence of images of the path and encodes them as sequential memories. In the online setting, a neural network attends to the past memory and takes actions accordingly. The algorithm has been verified on two interesting 3D indoor scene datasets. Pros: * The paper is very well-written.


Towards flexible perception with visual memory

Geirhos, Robert, Jaini, Priyank, Stone, Austin, Medapati, Sourabh, Yi, Xi, Toderici, George, Ogale, Abhijit, Shlens, Jonathon

arXiv.org Artificial Intelligence

Training a neural network is a monolithic endeavor, akin to carving knowledge into stone: once the process is completed, editing the knowledge in a network is nearly impossible, since all information is distributed across the network's weights. We here explore a simple, compelling alternative by marrying the representational power of deep neural networks with the flexibility of a database. Decomposing the task of image classification into image similarity (from a pre-trained embedding) and search (via fast nearest neighbor retrieval from a knowledge database), we build a simple and flexible visual memory that has the following key capabilities: (1.) The ability to flexibly add data across scales: from individual samples all the way to entire classes and billion-scale data; (2.) The ability to remove data through unlearning and memory pruning; (3.) An interpretable decision-mechanism on which we can intervene to control its behavior. Taken together, these capabilities comprehensively demonstrate the benefits of an explicit visual memory. We hope that it might contribute to a conversation on how knowledge should be represented in deep vision models -- beyond carving it in "stone" weights.
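To illustrate the "database" side of this proposal, here is a small sketch of an explicit visual memory supporting the capabilities the abstract lists: adding data, unlearning a class, pruning, and an interpretable nearest-neighbour decision. The class and its methods are assumptions for illustration, not the authors' system.

```python
import numpy as np

class VisualMemory:
    def __init__(self, dim):
        self.vectors = np.empty((0, dim))
        self.labels = []

    def add(self, vectors, labels):            # grow: single samples up to whole classes
        self.vectors = np.vstack([self.vectors, vectors])
        self.labels += list(labels)

    def unlearn(self, label):                  # remove all memory of one class
        keep = [i for i, y in enumerate(self.labels) if y != label]
        self.vectors = self.vectors[keep]
        self.labels = [self.labels[i] for i in keep]

    def prune(self, max_per_class):            # cap redundant entries per class
        keep, seen = [], {}
        for i, y in enumerate(self.labels):
            seen[y] = seen.get(y, 0) + 1
            if seen[y] <= max_per_class:
                keep.append(i)
        self.vectors = self.vectors[keep]
        self.labels = [self.labels[i] for i in keep]

    def predict(self, query, k=3):             # interpretable: also return the neighbours used
        sims = self.vectors @ query / (
            np.linalg.norm(self.vectors, axis=1) * np.linalg.norm(query) + 1e-9)
        top = np.argsort(-sims)[:k]
        votes = [self.labels[i] for i in top]
        neighbours = [(self.labels[i], float(sims[i])) for i in top]
        return max(set(votes), key=votes.count), neighbours
```

Because decisions are made by inspecting retrieved neighbours rather than opaque weights, removing or pruning entries directly and predictably changes behavior, which is the flexibility the abstract contrasts with knowledge "carved in stone" weights.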


Brain-Computer Interface Enables Mind Control of Robot Dog

#artificialintelligence

A new peer-reviewed study published in ACS Applied Nano Materials demonstrates a new type of AI-enabled brain-machine interface (BMI) featuring noninvasive biosensor nanotechnology and augmented reality that enables humans to use thoughts to control robots with a high degree of accuracy. Brain-machine interfaces (BMIs) are hands-free and voice-command-free communication systems that allow an individual to operate external devices through brain waves, with vast potential for future robotics, bionic prosthetics, neurogaming, electronics, and autonomous vehicles. The artificial intelligence (AI) renaissance, driven by the improved pattern-recognition capabilities of deep neural networks, is accelerating advances in brain-machine interfaces, also known as brain-computer interfaces (BCIs). AI deep learning helps find the relevant signals in noisy brain activity data. The neural activity of the human brain is recorded using sensors.